Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks
In the quest to enhance the efficiency and bio-plausibility of training deep
neural networks, Feedback Alignment (FA), which replaces the backward pass
weights with random matrices in the training process, has emerged as an
alternative to traditional backpropagation. While the appeal of FA lies in its
circumvention of computational challenges and its biological plausibility,
the theoretical understanding of this learning rule remains partial.
This paper uncovers a set of conservation laws underpinning the learning
dynamics of FA, revealing intriguing parallels between FA and Gradient Descent
(GD). Our analysis shows that FA harbors implicit biases akin to those
exhibited by GD, challenging the prevailing narrative that these learning
algorithms are fundamentally different. Moreover, we demonstrate that these
conservation laws elucidate sufficient conditions for layer-wise alignment with
feedback matrices in ReLU networks. We further show that this implies that
over-parameterized two-layer linear networks trained with FA converge to
minimum-norm solutions. Our findings open avenues for developing more
efficient and biologically plausible alternatives to backpropagation by
clarifying the principles that govern learning dynamics in deep networks.

Comment: 8 pages, 2 figures
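To make the mechanism concrete, below is a minimal NumPy sketch of Feedback Alignment on an over-parameterized two-layer linear network with a least-squares loss, the setting the abstract refers to. It is an illustration, not the paper's code: all names (W1, W2, B, lr) and hyperparameters are assumptions. The only change from backprop is that the backward pass propagates the error through a fixed random matrix B instead of W2.T.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n = 10, 50, 3, 200    # over-parameterized hidden layer

X = rng.normal(size=(d_in, n))
W_true = rng.normal(size=(d_out, d_in))
Y = W_true @ X                               # linear teacher targets

W1 = rng.normal(size=(d_hidden, d_in)) * 0.1
W2 = rng.normal(size=(d_out, d_hidden)) * 0.1
B = rng.normal(size=(d_hidden, d_out))       # fixed random feedback matrix

lr = 1e-3
for step in range(2000):
    H = W1 @ X                               # hidden activations (linear net)
    E = W2 @ H - Y                           # output error

    # Backprop would propagate the error with W2.T @ E;
    # Feedback Alignment uses the fixed random matrix B instead.
    dW2 = E @ H.T / n
    dW1 = (B @ E) @ X.T / n

    W2 -= lr * dW2
    W1 -= lr * dW1

loss = 0.5 * np.mean(np.sum((W2 @ W1 @ X - Y) ** 2, axis=0))
print(f"final loss: {loss:.4f}")

# "Alignment" refers to W2.T coming to agree in direction with B over training:
cos = np.sum(W2.T * B) / (np.linalg.norm(W2) * np.linalg.norm(B))
print(f"cosine(W2.T, B): {cos:.3f}")
```

The cosine printed at the end tracks the layer-wise alignment the paper analyzes: although B is random and never updated, the forward weights drift so that W2.T partially aligns with B, which is what makes the random feedback signal an effective descent direction.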